The goals / steps of this project are the following:

* Compute the camera calibration matrix and distortion coefficients from a set of chessboard images.
* Apply a distortion correction to raw images.
* Use color transforms and gradients to create a thresholded binary image.
* Apply a perspective transform to rectify the binary image ("birds-eye view").
* Detect lane pixels and fit a 2nd order polynomial to find the lane boundary.
* Determine the curvature of the lane and the vehicle position with respect to center.
* Warp the detected lane boundaries back onto the original image and output a visual display together with the curvature and position estimates.
The code for the camera calibration step is contained in the first code cell of the IPython notebook located in "P4.ipynb".
I start by preparing “object points”, which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, objp is just a replicated array of coordinates, and objpoints will be appended with a copy of it every time I successfully detect all chessboard corners in a test image. imgpoints will be appended with the (x, y) pixel position of each of the corners in the image plane with each successful chessboard detection.
I then used the output objpoints and imgpoints to compute the camera calibration and distortion coefficients using the cv2.calibrateCamera() function. I applied this distortion correction to the test image using the cv2.undistort() function and obtained this result:
[image: distortion-corrected chessboard calibration image]
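For reference, here is a minimal sketch of this calibration step; the 9x6 inner-corner count and the `camera_cal/` glob pattern are assumptions, not necessarily the values used in the notebook:

```python
import glob
import cv2
import numpy as np

nx, ny = 9, 6  # inner chessboard corners (assumed 9x6 pattern)
objp = np.zeros((nx * ny, 3), np.float32)
objp[:, :2] = np.mgrid[0:nx, 0:ny].T.reshape(-1, 2)

objpoints, imgpoints = [], []  # 3D points in world space, 2D points in the image plane
for fname in glob.glob('camera_cal/calibration*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)
    if ret:  # keep only images where all corners were detected
        objpoints.append(objp)
        imgpoints.append(corners)

# Compute the camera matrix and distortion coefficients, then undistort
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
undistorted = cv2.undistort(img, mtx, dist, None, mtx)
```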
To demonstrate this step, I applied the same distortion correction to one of the test images by calling cv2.undistort() with the camera matrix and distortion coefficients obtained in the camera calibration step:

[image: distortion-corrected road test image]
By converting the undistorted image to the HLS color space and separating out the S channel, we can see that the S channel does a fairly robust job of picking up the lane lines under very different color and lighting conditions, while the other selections look noisy. I used a combination of color and gradient thresholds to generate a binary image. Here's an example of my output for these steps (note: this is not actually from one of the test images):
[image: combined color and gradient thresholded binary image]
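A minimal sketch of such a combined threshold, assuming an S-channel threshold plus a Sobel-x gradient threshold; the threshold ranges here are illustrative, not necessarily the ones used in the notebook:

```python
import cv2
import numpy as np

def binary_threshold(undistorted, s_thresh=(170, 255), sx_thresh=(20, 100)):
    # The S channel of HLS picks up the lane lines under varied lighting
    hls = cv2.cvtColor(undistorted, cv2.COLOR_BGR2HLS)
    s_channel = hls[:, :, 2]
    s_binary = (s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])

    # Sobel gradient in x highlights near-vertical edges such as lane lines
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    abs_sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    scaled = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))
    sx_binary = (scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])

    # Combine: a pixel is "on" if either mask fires
    combined = np.zeros_like(s_channel)
    combined[s_binary | sx_binary] = 1
    return combined
```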
The code for my perspective transform is contained in the fourth code cell of the IPython notebook located in "P4.ipynb". cv2.getPerspectiveTransform() takes source (src) and destination (dst) points and returns the transform matrix M that maps one set onto the other; the image is then warped with cv2.warpPerspective() and M. I chose to hardcode the source and destination points in the following manner:
| Source     | Destination |
|:----------:|:-----------:|
| 190, 720   | 325, 720    |
| 589, 457   | 325, 0      |
| 698, 457   | 1010, 0     |
| 1145, 720  | 1010, 720   |
I verified that my perspective transform was working as expected by drawing the src and dst points onto a test image and its warped counterpart and checking that the lane lines appear parallel in the warped image.
[image: src and dst points drawn on a test image and its warped counterpart]
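A minimal sketch of the transform using the points above, where binary stands for the thresholded image from the previous step:

```python
import cv2
import numpy as np

src = np.float32([[190, 720], [589, 457], [698, 457], [1145, 720]])
dst = np.float32([[325, 720], [325, 0], [1010, 0], [1010, 720]])

M = cv2.getPerspectiveTransform(src, dst)     # road view -> birds-eye view
Minv = cv2.getPerspectiveTransform(dst, src)  # birds-eye view -> road view

warped = cv2.warpPerspective(binary, M, (binary.shape[1], binary.shape[0]),
                             flags=cv2.INTER_LINEAR)
```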
I then identified the lane-line pixels in the warped binary image and fit each lane line with a 2nd order polynomial; the detected lane region is visualized with cv2.fillPoly().
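One common way to locate the lane pixels before the polynomial fit is a histogram-based sliding-window search; the sketch below is illustrative, and the notebook's exact search may differ:

```python
import numpy as np

def fit_lane_lines(warped, nwindows=9, margin=100, minpix=50):
    # Histogram of the bottom half gives a starting x position for each line
    histogram = np.sum(warped[warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx = np.argmax(histogram[:midpoint])
    rightx = np.argmax(histogram[midpoint:]) + midpoint

    nonzeroy, nonzerox = warped.nonzero()
    window_height = warped.shape[0] // nwindows
    left_inds, right_inds = [], []

    for window in range(nwindows):
        y_low = warped.shape[0] - (window + 1) * window_height
        y_high = warped.shape[0] - window * window_height
        in_y = (nonzeroy >= y_low) & (nonzeroy < y_high)
        good_left = np.where(in_y & (np.abs(nonzerox - leftx) < margin))[0]
        good_right = np.where(in_y & (np.abs(nonzerox - rightx) < margin))[0]
        left_inds.append(good_left)
        right_inds.append(good_right)
        # Re-center the next window on the mean x of the pixels just found
        if len(good_left) > minpix:
            leftx = int(np.mean(nonzerox[good_left]))
        if len(good_right) > minpix:
            rightx = int(np.mean(nonzerox[good_right]))

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)

    # Fit x = f(y) as a 2nd order polynomial for each lane line
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit
```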
The code for calculating the radius of curvature of the lane and the position of the vehicle with respect to center is contained in the sixth code cell of the IPython notebook located in "P4.ipynb":
```python
right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0])
position_from_center = ((x_left_pix + x_right_pix)/2 - midx) * xm_per_pix
```
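Putting the two formulas together, here is a self-contained sketch of the curvature and offset calculation; the meters-per-pixel conversions and the helper names are assumptions:

```python
import numpy as np

ym_per_pix = 30 / 720   # meters per pixel in y (assumed lane length in the warped image)
xm_per_pix = 3.7 / 700  # meters per pixel in x (assumed lane width in the warped image)

def curvature_and_position(left_fit, right_fit, image_shape):
    h, w = image_shape[:2]
    y_eval = h - 1  # evaluate at the bottom of the image, closest to the car

    # Refit the lines in world space (meters) before computing curvature
    ploty = np.linspace(0, h - 1, h)
    leftx = np.polyval(left_fit, ploty)
    rightx = np.polyval(right_fit, ploty)
    left_fit_cr = np.polyfit(ploty * ym_per_pix, leftx * xm_per_pix, 2)
    right_fit_cr = np.polyfit(ploty * ym_per_pix, rightx * xm_per_pix, 2)

    left_curverad = ((1 + (2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**1.5) \
        / np.absolute(2*left_fit_cr[0])
    right_curverad = ((1 + (2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**1.5) \
        / np.absolute(2*right_fit_cr[0])

    # Vehicle offset: distance between the image center and the lane center at the bottom
    x_left_pix = np.polyval(left_fit, y_eval)
    x_right_pix = np.polyval(right_fit, y_eval)
    midx = w / 2
    position_from_center = ((x_left_pix + x_right_pix) / 2 - midx) * xm_per_pix
    return left_curverad, right_curverad, position_from_center
```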
I implemented the final drawing step in the 7th code cell of the IPython notebook located in "P4.ipynb": the lane region is filled with cv2.fillPoly(), warped back to the original perspective with cv2.warpPerspective(), and overlaid onto the undistorted image with cv2.addWeighted(). Here is an example of my result on a test image:
[image: detected lane region drawn back onto the road]
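A minimal sketch of this drawing step, assuming the inverse perspective matrix Minv and the fitted polynomials from the earlier steps are available:

```python
import cv2
import numpy as np

def draw_lane(undistorted, warped, left_fit, right_fit, Minv):
    # Build a polygon between the two fitted lines in the warped (birds-eye) space
    ploty = np.linspace(0, warped.shape[0] - 1, warped.shape[0])
    left_fitx = np.polyval(left_fit, ploty)
    right_fitx = np.polyval(right_fit, ploty)

    color_warp = np.zeros((warped.shape[0], warped.shape[1], 3), dtype=np.uint8)
    pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
    pts = np.hstack((pts_left, pts_right)).astype(np.int32)
    cv2.fillPoly(color_warp, pts, (0, 255, 0))

    # Warp the lane polygon back to the original perspective and overlay it
    newwarp = cv2.warpPerspective(color_warp, Minv,
                                  (undistorted.shape[1], undistorted.shape[0]))
    return cv2.addWeighted(undistorted, 1, newwarp, 0.3, 0)
```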
To smooth the output, the video_pipeline function updates the curvature and polynomial coefficients using a weighted average with the values from the lines detected in the previous frame. Here's a link to my video result. The video was generated with:
```python
from moviepy.editor import VideoFileClip

output = 'Test.mp4'
clip1 = VideoFileClip("project_video.mp4")
output_clip = clip1.fl_image(process_frame)
%time output_clip.write_videofile(output, audio=False)
```
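The smoothing itself can be as simple as a weighted average of the new polynomial coefficients with the previous frame's; a minimal sketch (the 0.7/0.3 weights and the names are illustrative, not the exact ones in the notebook):

```python
import numpy as np

def smooth(previous, current, weight=0.7):
    """Weighted average of the previous and current polynomial coefficients."""
    if previous is None:
        return current
    return weight * np.asarray(previous) + (1 - weight) * np.asarray(current)

# Usage inside the per-frame pipeline, e.g.:
# left_fit = smooth(prev_left_fit, left_fit)
```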
Warping the image to a birds-eye view and fitting the lane lines with a polynomial is a promising method: it can find not only straight lane lines but also curved ones. However, it is not robust enough to deal with complex environments like the Valley Track in the Behavioral Cloning project; shadows, changing brightness, traffic signs, or a white lane line against a bright background (for example, under trees) can lead to noisy results. By applying HLS and Sobel masks to the image I get better results, but additional issues still occur due to poor road markings and intersecting lines, as you can see in challenge.mp4. A fix for that could be checking the distance between the two detected lane lines: filtering out candidate lines that are not roughly the correct distance apart (about 3.7 meters) would remove many of the spurious lines in the challenge video, and additional filtering between video frames could resolve the rest. As future work, I would like to implement such filtering to make the pipeline more robust on harder_challenge_video.mp4.
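As a sketch of the proposed distance check (the nominal 3.7 m width, the tolerance, and the conversion factor are assumptions):

```python
import numpy as np

xm_per_pix = 3.7 / 700  # meters per pixel in x (assumed)

def lanes_plausible(left_fit, right_fit, y_eval=719, tolerance=0.7):
    """Reject detections whose lane width is far from the nominal 3.7 m."""
    width_pix = np.polyval(right_fit, y_eval) - np.polyval(left_fit, y_eval)
    return abs(width_pix * xm_per_pix - 3.7) < tolerance
```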